Revealed: The richest and youngest AI-billionaires making fortune from the big tech boom

Daily Mail - Science & tech

From helping you answer emails to translating legal documents, artificial intelligence is now a part of almost all facets of life. Meanwhile, organisations from Microsoft and Apple to the NHS have piled vast sums of funding into the latest intelligent software. And for the few people behind this AI boom, there have been enormous profits to be made. Leading the pack as the richest of the new AI billionaires is Jensen Huang, CEO of chipmaker Nvidia, with a staggering net worth of £113 billion ($151bn). Mr Huang joins several monumental big tech figures, such as Meta's Mark Zuckerberg and Elon Musk, who have recently made huge investments in AI.


Seriously, What Is 'Superintelligence'?

WIRED

Meta just announced a major move in its AI efforts--investing in Scale AI and building a superintelligence research lab. While Meta has been trying to keep up with big names in the AI race, such as OpenAI, Anthropic and Google, the company's new strategy includes dropping some serious cash to acquire talent and invest in Scale AI. Today on the show, we dive into the deal between Meta and Scale AI, including what Meta aims to get out of the investment, and we ask the question we are all wondering: What is superintelligence, anyway? Write to us at uncannyvalley@wired.com. You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how: If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link.


Meta to announce $15bn investment in bid to achieve computerised 'superintelligence'

The Guardian

Meta is to announce a $15bn (£11bn) bid to achieve computerised "superintelligence", according to multiple reports. The Silicon Valley race to dominate artificial intelligence is speeding up despite the patchy performance of many existing AI systems. Mark Zuckerberg, Meta's chief executive, is expected to announce the company will buy a 49% stake in Scale AI, a startup led by Alexandr Wang and co-founded by Lucy Guo, in a move described by one Silicon Valley analyst as the action of "a wartime CEO". Superintelligence describes a type of AI that can perform better than humans at all tasks. Current AI systems have not yet matched human performance across all tasks, a milestone known as artificial general intelligence (AGI).


Future of AI in focus at Web Summit Qatar 2025

Al Jazeera

The future of artificial intelligence (AI) has been the focus of tech entrepreneurs and financial backers gathered in Doha for the second annual Web Summit hosted by Qatar. The four-day digital technology and emerging innovation summit kicked off its second day on Monday, with attendees eyeing an AI landscape that is being transformed rapidly. Leading entrepreneurs from around the world, including Alexandr Wang, founder and CEO of Scale AI, and Alexis Ohanian, co-founder of Reddit and general partner at Seven Seven Six, took centre stage at the event on the opening day. Reporting from Doha, Al Jazeera's Colin Baker said the summit is grappling with questions over the future of AI amid "companies and investors that are changing that landscape more rapidly than we expected". The United States and China are leading in preparedness for AI, said Wang of US company Scale AI.


AI Models Are Getting Smarter. New Tests Are Racing to Catch Up

TIME - Tech

Despite their expertise, AI developers don't always know what their most advanced systems are capable of--at least, not at first. To find out, systems are subjected to a range of tests--often called evaluations, or 'evals'--designed to tease out their limits. But due to rapid progress in the field, today's systems regularly achieve top scores on many popular tests, including the SAT and the U.S. bar exam, making it harder to judge just how quickly they are improving. A new set of much more challenging evals has emerged in response, created by companies, nonprofits, and governments. Yet even on the most advanced evals, AI systems are making astonishing progress.
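At its core, an eval of the kind described above is just a set of prompts with reference answers, and the model's score is the fraction it answers correctly. A minimal sketch of such a harness follows; the benchmark items and the `toy_model` stand-in are invented for illustration and do not reflect any real benchmark or system's API:

```python
# Minimal sketch of an AI evaluation ("eval") harness: score a model
# against a benchmark of question/answer pairs. The model is any
# callable that maps a question string to an answer string.

def run_eval(model, benchmark):
    """Return the fraction of benchmark items the model answers correctly."""
    correct = sum(
        1
        for question, reference in benchmark
        if model(question).strip().lower() == reference.lower()
    )
    return correct / len(benchmark)

# Invented toy benchmark, for illustration only.
benchmark = [
    ("What is 2 + 2?", "4"),
    ("Capital of France?", "Paris"),
]

# Hypothetical stand-in for a real model's inference call.
answers = {"What is 2 + 2?": "4", "Capital of France?": "Paris"}
toy_model = lambda q: answers.get(q, "")

print(run_eval(toy_model, benchmark))  # -> 1.0
```

A saturated benchmark is one where leading models all score near 1.0, which is why harder evals keep being commissioned.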


The Low-Paid Humans Behind AI's Smarts Ask Biden to Free Them From 'Modern Day Slavery'

WIRED

AI projects like OpenAI's ChatGPT get part of their savvy from some of the lowest-paid workers in the tech industry--contractors often in poor countries paid small sums to correct chatbots and label images. On Wednesday, 97 African workers who do AI training work or online content moderation for companies like Meta and OpenAI published an open letter to President Biden, demanding that US tech companies stop "systemically abusing and exploiting African workers." Most of the letter's signatories are from Kenya, a hub for tech outsourcing, whose president, William Ruto, is visiting the US this week. The workers allege that the practices of companies like Meta, OpenAI, and data provider Scale AI "amount to modern day slavery." The companies did not immediately respond to a request for comment.


Side Hustle or Scam? What to Know About Data Annotation Work

TIME - Tech

On TikTok, Reddit, and elsewhere, posts are popping up from users claiming they're making $20 per hour--or more--completing small tasks in their spare time on sites such as DataAnnotation.tech. As companies have rushed to build AI models, the demand for "data annotation" and "data labeling" work has increased. Workers complete tasks such as writing and coding, and tech companies then use the results to develop artificial intelligence systems, which are trained on large numbers of example data points. Some models require all of their input data to be labeled by humans, a technique referred to as "supervised learning." And while "unsupervised learning," in which AI models are fed unlabeled data, is becoming increasingly popular, AI systems trained this way still often require a final step involving data labeled by humans.
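The supervised-learning idea described above can be seen in a toy example: every training point carries a human-provided label, and the model predicts by generalising from those labels. The sketch below uses a one-nearest-neighbour classifier with invented data and labels, purely to illustrate the role annotators play:

```python
# Toy supervised learning: a 1-nearest-neighbour classifier.
# Each training example pairs an input with a human-provided label --
# the kind of annotation work described in the article.

def nearest_neighbor(train, query):
    """Return the label of the training point closest to `query`.

    `train` is a list of (feature_vector, label) pairs; the labels
    are what human annotators supply in supervised learning.
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    _, label = min(train, key=lambda pair: sq_dist(pair[0], query))
    return label

# Invented human-labeled examples: feature vector -> category.
labeled_data = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((5.0, 5.0), "dog"),
    ((4.8, 5.2), "dog"),
]

print(nearest_neighbor(labeled_data, (1.1, 0.9)))  # -> cat
print(nearest_neighbor(labeled_data, (5.1, 4.9)))  # -> dog
```

In unsupervised learning the label column would be absent and the model would have to find structure in the feature vectors alone, which is why a final human-labeled pass is still often needed.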


Researchers Develop New Technique to Wipe Dangerous Knowledge From AI Systems

TIME - Tech

A study published Tuesday presents a newly developed way to measure whether an AI model contains potentially hazardous knowledge, along with a technique for removing that knowledge from the model while leaving the rest of it relatively intact. Together, the findings could help prevent AI models from being used to carry out cyberattacks and deploy bioweapons. The study was conducted by researchers from Scale AI, an AI training data provider, and the Center for AI Safety, a nonprofit, along with a consortium of more than 20 experts in biosecurity, chemical weapons, and cybersecurity. The subject matter experts generated a set of questions that, taken together, could assess whether an AI model can assist in efforts to create and deploy weapons of mass destruction. The researchers from the Center for AI Safety, building on previous work on how AI models represent concepts, developed the "mind wipe" technique.


AI Is Coming for the Experts. First, It Needs Their Help

WIRED

Jay fell in love with math at boarding school after a supportive physics teacher introduced him to the joy of complex calculus. He went on to study physics and math in college, hoping to one day similarly pass on what he'd learned to a new generation. That chance came in October 2022, when 25-year-old Jay answered a job listing seeking a math expert to grade equations through an online platform. But he would not be inspiring budding young mathematicians like his past self. He would instead be training an artificial intelligence system that may eventually make his expertise obsolete.


Top AI Companies Join Government Effort to Set Safety Standards

TIME - Tech

The top U.S. artificial intelligence companies will participate in a government-led effort intended to craft federal standards on the technology to ensure that it's deployed safely and responsibly, the Commerce Department said Thursday. OpenAI, Anthropic, Microsoft Corp., Meta Platforms Inc. and Alphabet Inc.'s Google are among more than 200 members of a newly established AI Safety Institute Consortium under the department, Commerce Secretary Gina Raimondo said. Also on the list are Apple Inc., Amazon Inc., Hugging Face Inc. and IBM. The top industry players will work with the National Institute of Standards and Technology, a body within Commerce, along with other technology companies, civil society groups, academics, and state and local government officials to establish safety standards regarding AI. "President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem," Raimondo said in a statement. Major tech companies have been engaging with the Biden administration and policymakers in Washington on regulating AI as the technology rapidly advances and is poised to disrupt industries.